Self-supervised learning (SSL) learns useful representations from unlabelled data by training networks to be invariant to pairs of augmented versions of the same input. Non-contrastive methods avoid collapse either by directly regularizing the covariance matrix of network outputs or through asymmetric loss architectures, two seemingly unrelated approaches. Here, by building on DirectPred, we lay out a theoretical framework that reconciles these two views. We derive analytical expressions for the representational learning dynamics in linear networks. By expressing them in the eigenspace of the embedding covariance matrix, where the solutions decouple, we reveal the mechanism and conditions that provide implicit variance regularization. These insights allow us to formulate a new isotropic loss function that equalizes eigenvalue contribution and renders learning more robust. Finally, we show empirically that our findings translate to nonlinear networks trained on CIFAR-10 and STL-10.
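The decoupling mechanism can be illustrated with a small NumPy sketch (illustrative only; the network, batch, and dimensions are made-up stand-ins, not the paper's setup): rotating embeddings into the eigenbasis of their covariance matrix produces decorrelated coordinates, the basis in which the per-mode learning dynamics are analyzed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embeddings from a made-up linear network applied to random inputs.
Z = rng.normal(size=(512, 8)) @ rng.normal(size=(8, 8))

# Embedding covariance matrix and its eigendecomposition.
C = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)

# Rotating embeddings into the eigenbasis decorrelates the dimensions:
# the covariance of the rotated embeddings is (numerically) diagonal,
# the basis in which the learning dynamics decouple mode by mode.
Z_rot = Z @ eigvecs
C_rot = np.cov(Z_rot, rowvar=False)
off_diag = C_rot - np.diag(np.diag(C_rot))
print(np.max(np.abs(off_diag)))  # near machine precision
```

In this diagonal basis, each eigenvalue of the embedding covariance evolves independently, which is what makes per-mode variance regularization visible.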
Coordinate-based implicit neural networks, or neural fields, have emerged as useful representations of shape and appearance in 3D computer vision. Despite advances however, it remains challenging to build neural fields for categories of objects without datasets like ShapeNet that provide canonicalized object instances that are consistently aligned for their 3D position and orientation (pose). We present Canonical Field Network (CaFi-Net), a self-supervised method to canonicalize the 3D pose of instances from an object category represented as neural fields, specifically neural radiance fields (NeRFs). CaFi-Net directly learns from continuous and noisy radiance fields using a Siamese network architecture that is designed to extract equivariant field features for category-level canonicalization. During inference, our method takes pre-trained neural radiance fields of novel object instances at arbitrary 3D pose, and estimates a canonical field with consistent 3D pose across the entire category. Extensive experiments on a new dataset of 1300 NeRF models across 13 object categories show that our method matches or exceeds the performance of 3D point cloud-based methods.
Given a particular embodiment, we propose a novel method (C3PO) that learns policies able to achieve any arbitrary position and pose. Such a policy would allow for easier control and would be reusable as a key building block for downstream tasks. The method is two-fold: first, we introduce a novel exploration algorithm that optimizes for uniform coverage and discovers a set of achievable states, and we investigate its ability to attain both high coverage and hard-to-discover states; second, we leverage this set of achievable states as training data for a universal goal-achievement policy, a goal-based SAC variant. We demonstrate the trained policy's performance in achieving a large number of novel states. Finally, we showcase the influence of massive unsupervised training of a goal-achievement policy with state-of-the-art pose-based control of the Hopper, Walker, HalfCheetah, Humanoid and Ant embodiments.
Language is one of the primary means by which we describe the 3D world around us. While rapid progress has been made in text-to-2D-image synthesis, similar progress in text-to-3D-shape synthesis has been hindered by the lack of paired (text, shape) data. Moreover, extant methods for text-to-shape generation have limited shape diversity and fidelity. We introduce TextCraft, a method that addresses these limitations by producing high-fidelity and diverse 3D shapes without the need for (text, shape) pairs during training. TextCraft achieves this by leveraging CLIP and a multi-resolution approach: it first generates in a low-dimensional latent space and then upscales to a higher resolution, improving the fidelity of the generated shape. To improve shape diversity, we use a discrete latent space which is modelled using a bidirectional transformer conditioned on the interchangeable image-text embedding space induced by CLIP. Moreover, we present a novel variant of classifier-free guidance, which further improves the accuracy-diversity trade-off. Finally, we perform extensive experiments that demonstrate that TextCraft outperforms state-of-the-art baselines.
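For context, standard classifier-free guidance blends conditional and unconditional predictions with a single scale; the paper's variant is not specified in the abstract, so the sketch below shows only the standard form (the function name and toy logit values are hypothetical):

```python
import numpy as np

def cfg_logits(cond_logits, uncond_logits, guidance_scale):
    """Standard classifier-free guidance: push the conditional prediction
    away from the unconditional one by a factor of guidance_scale."""
    cond_logits = np.asarray(cond_logits, dtype=float)
    uncond_logits = np.asarray(uncond_logits, dtype=float)
    return uncond_logits + guidance_scale * (cond_logits - uncond_logits)

# A scale of 1.0 recovers the conditional prediction unchanged.
print(cfg_logits([2.0, 0.0], [1.0, 1.0], 1.0))
# Larger scales sharpen toward the condition, trading diversity for accuracy.
print(cfg_logits([2.0, 0.0], [1.0, 1.0], 3.0))
```

The guidance scale is the knob behind the accuracy-diversity trade-off the abstract mentions: higher scales follow the text condition more closely at the cost of sample variety.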
Prognostics and health management (PHM) is an emerging field that has attracted wide attention from the manufacturing industry because of the benefits and efficiencies it brings. Remaining useful life (RUL) prediction is at the heart of any PHM system. The latest data-driven research requires large amounts of labelled training data before a performant model can be trained under the supervised learning paradigm. This is where transfer learning (TL) and domain adaptation (DA) methods step in, making it possible to generalize a supervised model to other domains with different data distributions and no labelled data. In this paper, we propose \textit{LAMA-Net}, an encoder-based model (Transformer) with an induced bottleneck, latent alignment using Maximum Mean Discrepancy (MMD), and manifold learning, to address the problem of unsupervised homogeneous domain adaptation for RUL prediction. LAMA-Net is validated using the NASA C-MAPSS turbofan engine dataset and compared against other state-of-the-art DA techniques. The results show that the proposed method offers a promising approach to domain adaptation in RUL prediction. The code will be made available once the paper is out of review.
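A minimal sketch of an MMD term of the kind used for latent alignment, assuming an RBF kernel (the paper's exact kernel and weighting are not given in the abstract; the data here is synthetic):

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy with an
    RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
# Matched distributions give a small MMD; a domain shift gives a larger one.
same = mmd2(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
shifted = mmd2(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)) + 3.0)
print(same, shifted)
```

In a DA setting this quantity would be computed between source- and target-domain latents and minimized alongside the supervised RUL loss, pulling the two latent distributions together.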
We present ShapeCrafter, a neural network for recursive text-conditioned 3D shape generation. Existing methods for generating text-conditioned 3D shapes consume an entire text prompt to generate a 3D shape in a single step. However, humans tend to describe shapes recursively: we may begin with an initial description and progressively add details based on intermediate results. To capture this recursive process, we introduce a method that generates a distribution of 3D shapes conditioned on an initial phrase and gradually evolves it as more phrases are added. Since existing datasets are insufficient for training such a method, we present Text2Shape++, a large dataset of 369K shape-text pairs that supports recursive shape generation. To capture the local details that are often used to refine shape descriptions, we build on vector-quantized deep implicit functions, which generate distributions of high-quality shapes. Results show that our method can generate shapes consistent with text descriptions, and that shapes evolve gradually as more phrases are added. Our method supports shape editing and extrapolation, and can enable new applications in human-machine collaboration for creative design.
We propose GATE, a novel high-performance, parameter- and computation-efficient deep learning architecture for tabular data. GATE uses a GRU-inspired gating mechanism as a feature-representation learning unit with an in-built feature selection mechanism. We combine it with an ensemble of differentiable non-linear decision trees, re-weighted with simple self-attention, to predict our desired output. We demonstrate that GATE is a competitive alternative to SOTA approaches through experiments on several public datasets (both classification and regression). The code will be uploaded as soon as the paper is through review.
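The abstract does not give the unit's equations, so the following is only a generic GRU-style gate acting as soft feature selection, with made-up weights and shapes, to illustrate the idea:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_feature_unit(x, W_gate, W_feat):
    """Generic GRU-style gating (illustrative, not the paper's exact unit):
    a sigmoid gate in (0, 1) softly selects which learned features pass."""
    gate = sigmoid(x @ W_gate)   # per-feature selection weights
    feat = np.tanh(x @ W_feat)   # candidate feature representation
    return gate * feat           # gated (soft-selected) features

x = rng.normal(size=(4, 6))            # batch of 4 tabular rows, 6 columns
W_gate = rng.normal(size=(6, 8)) * 0.1
W_feat = rng.normal(size=(6, 8)) * 0.1
h = gated_feature_unit(x, W_gate, W_feat)
print(h.shape)  # (4, 8)
```

Because the gate output is near zero for unhelpful inputs, the unit performs feature selection implicitly while still being trained end-to-end by gradient descent.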
Pre-training large language models usually requires massive amounts of resources, both in terms of computation and data. Frequently used web sources such as Common Crawl may contain enough noise to make this pre-training sub-optimal. In this work, we experiment with different sampling methods on the Spanish version of mC4 and present a novel data-centric technique that we name $\textit{perplexity sampling}$, which enables the pre-training of language models in roughly half the number of steps and using one fifth of the data. The resulting models are comparable to the current state of the art, and even achieve better results for certain tasks. Our work demonstrates the versatility of Transformers and paves the way for small teams to train models on a limited budget. Our models are available at this $\href{https://huggingface.co/bertin-project}{URL}$.
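The exact perplexity-sampling function is not given in the abstract; the sketch below shows one plausible form, assuming documents are scored by a reference language model's perplexity and kept stochastically with a weight favoring mid-range perplexity (the weight function, median, and width are illustrative choices):

```python
import math
import random

random.seed(0)

def sampling_weight(ppl, median_ppl, width):
    """Illustrative perplexity-based keep probability (not the paper's exact
    function): favor documents near the corpus median perplexity, down-weighting
    both very low perplexity (boilerplate-like) and very high (noisy) text."""
    return math.exp(-((ppl - median_ppl) / width) ** 2)

# Toy corpus of (text, reference-model perplexity) pairs.
docs = [("clean prose", 45.0), ("boilerplate", 5.0), ("garbled html", 900.0)]
median_ppl, width = 50.0, 40.0
subsample = [text for text, ppl in docs
             if random.random() < sampling_weight(ppl, median_ppl, width)]
print(subsample)
```

Filtering the corpus this way shrinks the training set while preferentially keeping the documents most useful for pre-training, which is how fewer steps and less data can still match a model trained on the raw crawl.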
Reducing the annotation effort is crucial for saving time and expense when domain experts are required to annotate data for complex machine learning tasks. For cases where no annotations are available, one approach is to exploit the structure of the feature space with clustering-based active learning (AL) methods. However, these methods depend heavily on how samples are organized in the feature space and on which distance metric is used. Unsupervised methods such as contrastive predictive coding (CPC) can potentially be used to learn an organized feature space, but they typically produce high-dimensional features, which can be challenging for estimating data density. In this paper, we combine CPC with multiple dimensionality-reduction methods in search of functioning practices for clustering-based AL. Our experiments simulating the deployment of a speech emotion recognition system show that both the local and global topology of the feature space can be successfully used for AL, and that CPC can be used to improve clustering-based AL performance over traditional signal features. Additionally, we observe that compressing the data dimensionality does not harm AL performance, and that 2-D feature representations perform similarly to higher-dimensional representations when the number of annotations is not very low.
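A toy pipeline for clustering-based sample selection after dimensionality reduction might look as follows (synthetic features stand in for CPC embeddings; PCA and greedy farthest-point selection are illustrative choices, not necessarily the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional features (stand-ins for CPC embeddings):
# three well-separated clusters of 100 samples each in 16-D.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 16))
               for c in (-2.0, 0.0, 2.0)])

# Step 1: dimensionality reduction to 2-D via PCA (SVD of centered data).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T

# Step 2: greedy farthest-point selection in the reduced space --
# a simple diversity-based query strategy for active learning.
def farthest_point_queries(Z, n_queries):
    chosen = [0]                                  # arbitrary seed sample
    dists = np.linalg.norm(Z - Z[0], axis=1)
    for _ in range(n_queries - 1):
        nxt = int(dists.argmax())                 # farthest from chosen set
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(Z - Z[nxt], axis=1))
    return chosen

queries = farthest_point_queries(X2, 3)
print(queries)  # one index drawn from each of the three clusters
```

The queried indices are the samples an annotator would label first; in the 2-D space, diversity-based selection covers all three modes of the data with only three labels.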
3D models of manufactured objects are important for populating virtual worlds and for synthetic data in vision and robotics. To be most useful, such objects should be articulated: their parts should move when interacted with. While articulated object datasets exist, creating them is labor-intensive. Learning-based prediction of part motions can help, but all existing methods require annotated training data. In this paper, we present an unsupervised approach for discovering articulated motions in a collection of part-segmented 3D shapes. Our approach is based on a concept we call closure: any valid articulation of an object's parts should keep the object in the same semantic category (e.g., a chair stays a chair). We operationalize this concept with an algorithm that optimizes a shape's part motion parameters such that it can transform into other shapes in the collection. We evaluate our approach by using it to re-discover part motions from the PartNet-Mobility dataset. For almost all shape categories, our method's predicted motion parameters have low error with respect to ground-truth annotations, outperforming two supervised motion prediction methods.
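The optimization behind the closure idea can be caricatured in 2-D: search for the part rotation about a hinge that best transforms one instance's part into the pose observed on another instance (the points, hinge, and grid search are toy stand-ins for the paper's actual shapes and optimizer):

```python
import math

# Toy 2-D setup: a part sampled as points, hinged at the origin, and the
# same part observed on another instance of the category.
hinge = (0.0, 0.0)
part = [(1.0, 0.0), (2.0, 0.0)]      # source part point samples
target = [(0.0, 1.0), (0.0, 2.0)]    # corresponding points on another instance

def rotate(p, angle):
    """Rotate point p about the hinge (origin) by the given angle."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def cost(angle):
    """Sum of squared distances between the rotated part and the target."""
    moved = [rotate(p, angle) for p in part]
    return sum((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
               for a, b in zip(moved, target))

# Grid search over candidate articulation angles (0 to 180 degrees).
angles = [i * math.pi / 180 for i in range(0, 181)]
best = min(angles, key=cost)
print(round(math.degrees(best)))  # 90
```

The recovered 90-degree rotation is the articulation that maps one instance onto the other while both remain valid members of the category, which is the closure criterion in miniature.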